Africa RISING M&E Expert Meeting
5-7 September 2012, Large auditorium, ILRI Ethiopia, Addis Ababa
Differentiating M&E for more effective monitoring and evaluation
Feed the Future indicators presentation
FTF Indicators Session IV Ender.ppt
Comment: We heard that we should disregard all previous messages about indicators, etc., from USAID.
Q: About farmer satisfaction - what about scientists' satisfaction in producing materials (that are not recognized)? Where do they fit here? How do we measure the outputs of scientists in the project?
A: I doubt that USAID would track this kind of indicator. New indicators could be entered, but they wouldn't go past the stage of being entered.
What is very effective for increasing and maintaining support are compelling narratives, which might include custom indicators.
Q: What about social capital indicators, e.g. about sharing power? Are there any in there?
A: No. A lot of people nominated indicators, and choices were made based on what is generalizable across projects, etc. Such specific indicators are useful in a given project but not in FtF.
Africa RISING M&E planned activities
M and E strategy.ppt
Timeline: moving from characterization, etc., to evaluation and survey design, up to the baseline survey. In M&E we differentiate monitoring (keeping track of program outputs) from evaluation (assessing the impact of Africa RISING), and then creating knowledge of what works (M&E) to assess effectiveness, rank policy/project alternatives, etc.
For monitoring we need constant and timely information and close collaboration between the CG implementers and IFPRI. What to do: outcome mapping? Cost-benefit analysis? Surveys? And evaluation must be thought through in advance...
For evaluation: How would you go about measuring the causal impact of Africa RISING on... productivity? We need to look at impact in the sense of how outcomes differ between program beneficiaries and non-beneficiaries.
We are assuming that all farmers in target villages are beneficiaries.
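To make the beneficiary/non-beneficiary comparison concrete, here is a minimal sketch (not from the presentation) of a difference-in-differences calculation once baseline and endline surveys exist. All numbers are hypothetical illustrations, not project data.

```python
# Difference-in-differences sketch: compare the *change* in productivity
# between beneficiary and non-beneficiary villages, netting out a common
# trend. Values below are assumed for illustration only.
import numpy as np

# Mean productivity (e.g. t/ha), [baseline, endline] - hypothetical values
beneficiary     = np.array([1.8, 2.6])
non_beneficiary = np.array([1.7, 2.0])

did = (beneficiary[1] - beneficiary[0]) - (non_beneficiary[1] - non_beneficiary[0])
print(f"difference-in-differences impact estimate: {did:.2f} t/ha")  # 0.50
```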
Questions from the presenter:
What questions would we like to answer?
How do we adapt FtF indicators to Africa RISING?
Ethics for controls (the same issue as placebos in medicine)?
Choice of outcome indicators & variables?
Sampling frame for randomization?
Sample design?
Statistical power for causal impact? (See the power-calculation sketch after this list.)
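On the last question above, a minimal sketch of the kind of power calculation involved - assuming a simple two-arm comparison with village-level clustering. The effect size, cluster size, and intra-cluster correlation below are illustrative assumptions, not Africa RISING design parameters.

```python
# Power calculation sketch for a two-arm comparison of mean productivity.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.25   # assumed standardized effect (Cohen's d)
alpha, power = 0.05, 0.80

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power
)

# With village-level randomization, inflate the sample by the design effect
# DEFF = 1 + (m - 1) * ICC, for m households per village and intra-cluster
# correlation ICC (both assumed here).
m, icc = 10, 0.05
deff = 1 + (m - 1) * icc
print(f"households per arm (simple random sample): {n_per_arm:.0f}")
print(f"households per arm (clustered, DEFF={deff:.2f}): {n_per_arm * deff:.0f}")
```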
Q: The way we frame our work could affect our measurement. Preaching productivity as an output is conceptually problematic. We frame research outputs as varieties that are bred, etc.; we then engage in adoption pathways, and once farmers adopt, you see productivity growth.
A: I'm only pointing out that productivity increase is one of the major outcomes. We have to look at adoption. Our theory of change is that productivity should increase.
Q: There are a lot of questions about terminology, e.g. outputs. People have different perceptions about, e.g., inputs (in the presentation graph it's a donor perspective that inputs are funding). I agree that productivity increase happens as an outcome.
Q: The research framework looks at situational analysis (household characterization), the testing of technology, scaling out (the process of moving from outputs to outcomes), etc., so the output is one definition of how we think we might move to outcomes.
A: Along these lines, evaluation considers that productivity increase is an outcome.
We have to evaluate whether farmers are adopting the technology. We might also want to look into facilitating spillover effects for non-beneficiary farmers in treatment sites. DFID is interested in another, parallel program on scaling and markets.
USAID is also interested in linking up with other programs, USAID missions, etc., so we need a larger design. It gets big and messy, but focusing on this allows us to look at other scales. Africa RISING should attract funding from other donors or missions.
RCTs don't usually give you useful information, unless you link them with reducing farmer ??
At ATA we are testing hypotheses that certain approaches are better than others. Research and on-farm trials have shown that row planting generates high yields. We've done this, selected farmers, and are testing the hypothesis that row planting is superior. There are market and other implications. It gets complicated, but we want to validate in reality whether a technology is superior or not.
Comment: All these pieces can fit together. We have to think about what Africa RISING could do... We have to be clear on which questions we're testing.
Q: Open issue - which data to collect and who will collect the data?
A: This will be addressed next.
Q: For technologies, there's no way to trick farmers with a fake technology...
A: The assumption is that we would compare technology that has been scientifically tested with 'fake technology', meaning technology that simply has not been scientifically tested.
--> But it's going to damage us in the short run.
--> We can also use a placebo in time.
Leonard: we have a presentation about RCTs etc. which could address this.
Presentations about research design and opportunities for evaluation
Research Design_USAID Africa RISING M & E Meeting Addis Ababa_Sept 5-7 2012.ppt
Q: How can you evaluate impact in a way that satisfies demand (to show that you are demand-responsive)?
A: We can aggregate data regarding market demand in that area.
In a demand-side intervention, we offer an 'intention to treat'.
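As a reference for 'intention to treat' (the standard definition, not from the presentation): farmers are compared by what they were offered (assignment Z), not by what they actually took up (D).

```latex
% Intention-to-treat effect: compare outcomes Y by assignment Z
\mathrm{ITT} = \mathbb{E}[Y \mid Z=1] - \mathbb{E}[Y \mid Z=0]
% With partial uptake, the Wald/IV estimator recovers the effect on compliers
\mathrm{LATE} = \frac{\mathbb{E}[Y \mid Z=1] - \mathbb{E}[Y \mid Z=0]}
                     {\mathbb{E}[D \mid Z=1] - \mathbb{E}[D \mid Z=0]}
```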
Q: Do you have a typology, e.g. farmers who produce, sell and consume something...?
A: By characterizing farmers and looking at the average treatment effect, we can look at various criteria.
We can do ?? with 3,000 farmers and down to 10 households. CRS, World Vision, etc., will do the rest of this survey.
--> But we talked about co-investment proposals, etc. How do we do this?
--> Do we want to generate this evidence or do we stick with our traditional zones? Re: partnerships, partners have the resources to reach those other farmers, etc.
--> I don't see how this fits with our activities... We'll do market studies, then value chain analysis (the demand side), combine these with technologies, evaluate with a range of households, then control and develop our evidence, and THEN we can feed into that process. We're dealing with whole-system research.
--> Nobody says we will go to 300 villages, but our evaluation has to be credible.
--> Yes, but we need to find a way to combine the two.
An RCT is the icing on the cake.
Q: Can implementers help us? How does that tie in with our evaluation design? Will they be collecting information for us?
A: My guess is that Nafaka will do implementation anyhow. The way they're working is not about having good science - that's where we can influence them. They have already run surveys, etc., but I'm wondering how this works with the IFPRI M&E work.
--> Are they doing the implementation of the research part or only the value chain part? We can put a research supervisor in Morogoro.
Q: Randomization is fine, but the sampling frame you have is from 2000. That's 12 years ago... households have moved on. How can you claim it's still random if it's so old?
A: The agricultural subsidy program updates these figures every year, so the data are up to date.
We have to get treatment and control villages that are as similar as possible. We need legitimate counterfactuals. PSM (propensity score matching) is a second-best option... If we want to scale up the program, we shouldn't work with a self-selected sample, to avoid bias.
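A minimal sketch of nearest-neighbor propensity score matching, the second-best option mentioned above for when randomization is not feasible. The data, covariates, and effect size are simulated placeholders, not project data.

```python
# PSM sketch: match self-selected adopters to similar non-adopters on the
# estimated probability of adoption, then compare outcomes across pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                              # household covariates (assumed)
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # self-selected uptake
y = 2.0 * treated + X[:, 0] + rng.normal(size=n)         # productivity outcome (simulated)

# 1. Estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated household to its nearest control on the score.
t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
_, idx = nn.kneighbors(ps[t].reshape(-1, 1))

# 3. Average treatment effect on the treated (ATT) over matched pairs.
att = (y[t] - y[c[idx[:, 0]]]).mean()
print(f"matched ATT estimate: {att:.2f}")
```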
We should have a bigger sample size for the control group.
If you randomize at the district level, the districts are more likely to differ, and this could bias the impact evaluation... Why don't we randomize villages with similar agro-ecological conditions? --> But the district is where the multi-stakeholder platform is.
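A minimal sketch of the stratified alternative raised above: randomize villages to treatment and control *within* agro-ecological zones, so the arms stay comparable. Village and zone names are hypothetical.

```python
# Stratified village-level randomization: within each agro-ecological zone,
# shuffle the villages and assign half to treatment, half to control.
import random

villages_by_zone = {
    "highland": ["V01", "V02", "V03", "V04"],
    "midland":  ["V05", "V06", "V07", "V08"],
    "lowland":  ["V09", "V10", "V11", "V12"],
}

random.seed(42)  # fixed seed for a reproducible, auditable assignment
assignment = {}
for zone, villages in villages_by_zone.items():
    shuffled = random.sample(villages, len(villages))
    half = len(shuffled) // 2
    for v in shuffled[:half]:
        assignment[v] = "treatment"
    for v in shuffled[half:]:
        assignment[v] = "control"

print(assignment)
```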
We cannot use Nafaka data, can we? --> Yes, we should talk to them, but if we are responsible for a rigorous evaluation, we should have control over their research design...
Q: You said Nafaka are implementing and you collect data?
A: It's a possibility but we need to discuss it with them.
What I have proposed here is what we are planning to do. We are now often going back to people who are leaders in their fields to do quality control with them.
We have qualified people here and outside - so what about contractors, etc.? If we ask Nafaka, we need to ask about collecting their data... Is IFPRI separate from IITA? Could they manage this with contractors?
About phasing in and RCTs... this can happen where you can see impact quickly - which is attractive for donors. In agriculture, the impact of what we're doing will take a few years...
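One common way to operationalize phasing in is a pipeline rollout, where villages scheduled for later years serve as temporary controls for the earlier cohorts. A minimal sketch with hypothetical villages, not a proposed Africa RISING design:

```python
# Pipeline (phase-in) rollout sketch: randomly order villages, treat a new
# cohort each year, and use the not-yet-treated cohorts as temporary controls.
import random

villages = [f"V{i:02d}" for i in range(1, 13)]
random.seed(7)
random.shuffle(villages)

cohorts = {2013: villages[:4], 2014: villages[4:8], 2015: villages[8:]}
for year, treated in cohorts.items():
    controls = [v for y, vs in cohorts.items() if y > year for v in vs]
    print(year, "treat:", treated, "| still-untreated controls:", controls)
```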
We need to decide on our research design - if we decide we have to think about SI (sustainable intensification), etc., we have to design an evaluation program... I can't see how we can implement this.